# Lightweight MoE
## Phi Mini MoE Instruct GGUF

- License: MIT
- Tags: Large Language Model, English
- Author: gabriellarson
- Downloads: 2,458
- Likes: 1

Phi-mini-MoE is a lightweight Mixture of Experts (MoE) model aimed at English business and research scenarios, and it performs well in resource-constrained, low-latency deployments.
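The GGUF packaging targets local runtimes in the llama.cpp family. As a minimal sketch of that workflow, the snippet below loads a quantized file with llama-cpp-python; the file name, context size, and prompt are placeholders rather than details taken from this listing.

```python
# Minimal sketch: running a local GGUF quantization with llama-cpp-python.
# The model path is a placeholder; point it at the downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-mini-MoE-instruct-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,   # context window; adjust to the model's actual limit
    n_threads=8,  # CPU threads for low-latency local inference
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize MoE routing in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```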
## Tinymixtral 4x248M MoE

- License: Apache-2.0
- Tags: Large Language Model, Transformers
- Author: Isotonic
- Downloads: 1,310
- Likes: 2

TinyMixtral-4x248M-MoE is a small language model built on a Mixture of Experts (MoE) architecture, created by merging several TinyMistral variants, and is suited to text generation tasks.
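Because the entry is tagged Transformers, a minimal text-generation sketch follows; the repository id is inferred from the listed author and model name, so treat it as an assumption to verify rather than a path stated in the listing.

```python
# Minimal sketch: text generation with Hugging Face Transformers.
# The repo id is inferred from the listing (author Isotonic, model
# TinyMixtral-4x248M-MoE) and should be verified before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Isotonic/TinyMixtral-4x248M-MoE"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The Mixture of Experts architecture", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```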